60 result(s)
2023 Journal article Open Access OPEN
NoR-VDPNet++: real-time no-reference image quality metrics
Banterle F., Artusi A., Moreo A., Carrara F., Cignoni P.
Efficiency and efficacy are desirable properties for any evaluation metric dealing with Standard Dynamic Range (SDR) or High Dynamic Range (HDR) imaging. However, it is a daunting task to satisfy both properties simultaneously. On the one side, existing evaluation metrics like HDR-VDP 2.2 can accurately mimic the Human Visual System (HVS), but this typically comes at a very high computational cost. On the other side, computationally cheaper alternatives (e.g., PSNR, MSE, etc.) fail to capture many crucial aspects of the HVS. In this work, we present NoR-VDPNet++, a deep learning architecture for converting accurate full-reference metrics into no-reference metrics, thus reducing the computational burden. We show that NoR-VDPNet++ can be successfully employed in different application scenarios.
Source: IEEE Access 11 (2023): 34544–34553. doi:10.1109/ACCESS.2023.3263496
DOI: 10.1109/access.2023.3263496
Project(s): ENCORE via OpenAIRE
See at: IEEE Access Open Access | ieeexplore.ieee.org Open Access | ISTI Repository Open Access | CNR ExploRA
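The distillation idea summarised in the abstract above can be sketched in a few lines: a no-reference student model is regressed against the scores of a full-reference teacher metric. Everything below is illustrative, not the paper's code: a toy MSE-based score stands in for HDR-VDP, and a one-feature linear model stands in for the CNN.

```python
import random

def teacher_score(ref, img):
    """Full-reference stand-in for HDR-VDP: quality falls as MSE grows."""
    mse = sum((r - i) ** 2 for r, i in zip(ref, img)) / len(ref)
    return 100.0 / (1.0 + mse)

def nr_feature(img):
    """No-reference feature: mean absolute neighbour difference, a crude
    noisiness proxy (the paper learns features with a CNN instead)."""
    return sum(abs(a - b) for a, b in zip(img, img[1:])) / (len(img) - 1)

def fit_student(pairs):
    """Least-squares fit of score = w * feature + b over (feature, score)."""
    n = len(pairs)
    sx = sum(f for f, _ in pairs); sy = sum(s for _, s in pairs)
    sxx = sum(f * f for f, _ in pairs); sxy = sum(f * s for f, s in pairs)
    w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - w * sx) / n
    return lambda img: w * nr_feature(img) + b

# Training set: one smooth reference, distortions of growing strength.
random.seed(0)
ref = [0.5] * 256
pairs = []
for amp in (0.01, 0.05, 0.1, 0.2, 0.4):
    img = [v + random.uniform(-amp, amp) for v in ref]
    pairs.append((nr_feature(img), teacher_score(ref, img)))
student = fit_student(pairs)
```

Once fitted, the student predicts quality from the distorted image alone, which is the practical point of a no-reference metric.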


2023 Journal article Open Access OPEN
MoReLab: a software for user-assisted 3D reconstruction
Siddique A., Banterle F., Corsini M., Cignoni P., Sommerville D., Joffe C.
We present MoReLab, a tool for user-assisted 3D reconstruction. This reconstruction requires an understanding of the shapes of the desired objects. Our experiments demonstrate that existing Structure from Motion (SfM) software packages fail to estimate accurate 3D models from low-quality videos due to several issues such as low resolution, featureless surfaces, low lighting, etc. In such scenarios, which are common for industrial utility companies, user assistance becomes necessary to create reliable 3D models. In our system, the user first adds features and correspondences manually on multiple video frames. Then, classic camera calibration and bundle adjustment are applied. At this point, MoReLab provides several primitive shape tools, such as rectangles, cylinders, curved cylinders, etc., to model different parts of the scene and export 3D meshes. These shapes are essential for modeling industrial equipment, whose videos are typically captured by utility companies with old video cameras (low resolution, compression artifacts, etc.) and in disadvantageous lighting conditions (low lighting, torchlight attached to the video camera, etc.). We evaluate our tool on real industrial case scenarios and compare it against existing approaches. Visual comparisons and quantitative results show that MoReLab achieves superior results compared to other user-interactive 3D modeling tools.
Source: Sensors (Basel) 23 (2023). doi:10.3390/s23146456
DOI: 10.3390/s23146456
Project(s): EVOCATION via OpenAIRE
See at: Sensors Open Access | ISTI Repository Open Access | www.mdpi.com Open Access | CNR ExploRA
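After the user marks corresponding features and the cameras are calibrated, each correspondence is triangulated to a 3D point. A minimal sketch of two-view triangulation by the ray-midpoint method, under idealised pinhole cameras with unit focal length (the setup and names are illustrative, not MoReLab's API):

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Closest point between two 3D rays c + t*d (ray-midpoint method).
    Solves the 2x2 normal equations for the ray parameters s, t."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def mul(a, k): return [x * k for x in a]

    r = sub(c2, c1)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e, f = dot(d1, r), dot(d2, r)
    den = a * c - b * b            # ~0 only for (near-)parallel rays
    s = (e * c - b * f) / den
    t = (b * e - a * f) / den
    p1 = add(c1, mul(d1, s))       # closest point on ray 1
    p2 = add(c2, mul(d2, t))       # closest point on ray 2
    return mul(add(p1, p2), 0.5)

# Two cameras looking down +z; a pixel (u, v) back-projects to ray (u, v, 1).
P = [0.3, 0.2, 2.0]                       # ground-truth 3D point
c1, c2 = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]
d1 = [P[0] / P[2], P[1] / P[2], 1.0]      # observation in camera 1
d2 = [(P[0] - 1.0) / P[2], P[1] / P[2], 1.0]
X = triangulate_midpoint(c1, d1, c2, d2)
```

With noisy clicks the two rays no longer intersect, and the midpoint is the natural compromise; bundle adjustment then refines all points and cameras jointly.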


2022 Journal article Unknown
AI-Based media coding standards
Basso A., Ribeca P., Bosi M., Pretto N., Chollet G., Guarise M., Choi M., Chiariglione L., Iacoviello R., Banterle F., Artusi A., Gissi F., Fiandrotti A., Ballocca G., Mazzaglia M., Moskowitz S.
Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) is the first standards organization to develop data coding standards that have artificial intelligence (AI) as their core technology. MPAI believes that universally accessible standards for AI-based data coding can have the same positive effects on AI as standards had on digital media. Elementary components of MPAI standards, AI modules (AIMs), expose standard interfaces for operation in a standard AI framework (AIF). As their performance may depend on the technologies used, MPAI expects that competing developers providing AIMs will promote horizontal markets of AI solutions that build on and further promote AI innovation. Finally, the MPAI framework licences (FWLs) provide guidelines to intellectual property right (IPR) holders, facilitating the availability of compatible licences to standard users.
Source: SMPTE Motion Imaging Journal 131 (2022): 10–20. doi:10.5594/JMI.2022.3160793
DOI: 10.5594/jmi.2022.3160793
See at: SMPTE Motion Imaging Journal Restricted | ieeexplore.ieee.org | CNR ExploRA


2022 Open Access OPEN
Quantum computing algorithms: getting closer to critical problems in computational biology
Marchetti L., Nifosì R., Martelli P. L., Da Pozzo E., Cappello V., Banterle F., Trincavelli M. L., Martini C., D'Elia M.
The recent biotechnological progress has allowed life scientists and physicians to access an unprecedented, massive amount of data at all levels (molecular, supramolecular, cellular and so on) of biological complexity. So far, mostly classical computational efforts have been dedicated to the simulation, prediction or de novo design of biomolecules, in order to improve the understanding of their function or to develop novel therapeutics. At a higher level of complexity, the progress of omics disciplines (genomics, transcriptomics, proteomics and metabolomics) has prompted researchers to develop informatics means to describe and annotate new biomolecules identified with a resolution down to the single cell, but also with a high-throughput speed. Machine learning approaches have been applied both to modelling studies and to the handling of biomedical data. Quantum computing (QC) approaches hold the promise to resolve, speed up or refine the analysis of a wide range of these computational problems. Here, we review and comment on recently developed QC algorithms for biocomputing, with a particular focus on multi-scale modelling and genomic analyses. Indeed, differently from other computational approaches such as protein structure prediction, these problems have been shown to be adequately mapped onto quantum architectures, the main limit for their immediate use being the number of qubits and decoherence effects in the available quantum machines. Possible advantages over the classical counterparts are highlighted, along with a description of some hybrid classical/quantum approaches, which are likely the closest to being realistically applied in biocomputation.
Source: Briefings in Bioinformatics 23 (2022). doi:10.1093/bib/bbac437
DOI: 10.1093/bib/bbac437
See at: academic.oup.com Open Access | Briefings in Bioinformatics Open Access | ISTI Repository Open Access | Briefings in Bioinformatics Restricted | CNR ExploRA


2021 Contribution to book Restricted
Virtual clones for cultural heritage applications
Potenziani M., Banterle F., Callieri M., Dellepiane M., Ponchio F., Scopigno R.
Digital technologies are now mature for producing high-quality digital replicas of Cultural Heritage (CH) artifacts. The research results produced in the last decade have shown an impressive evolution and consolidation of the technologies for acquiring high-quality digital 3D models, encompassing both geometry and color (or, better, surface reflectance properties). Some recent technologies for constructing 3D models enriched by a high-quality encoding of the color attribute will be presented. The focus of this paper is to show and discuss practical solutions, which could be deployed without requiring the installation of a specific or sophisticated acquisition lab setup. In the second part of this paper, we focus on new solutions for the interactive visualization of complex models, adequate for modern communication channels such as the web and mobile platforms. Together with the algorithms and approaches, we also show some practical examples where high-quality 3D models have been used in CH research, restoration and conservation.
Source: From Pen to Pixel - Studies of the Roman Forum and the Digital Future of World Heritage, edited by Fortini Patrizia, Krusche Krupali, pp. 225–233. Roma: L'Erma di Bretschneider, 2021

See at: www.lerma.it Restricted | CNR ExploRA


2021 Conference article Open Access OPEN
Collaborative visual environments for evidence taking in digital justice: a design concept
Erra U., Capece N., Lettieri N., Fabiani E., Banterle F., Cignoni P., Dazzi P., Aleotti J., Monica R.
In recent years, Spatial Computing (SC) has emerged as a novel paradigm thanks to the advancements in Extended Reality (XR), remote sensing, and artificial intelligence. Computers are nowadays more and more aware of physical environments (i.e., objects' shape, size, location and movement) and can use this knowledge to blend technology into reality seamlessly, merge digital and real worlds, and connect users by providing innovative interaction methods. Criminal and civil trials offer an ideal scenario to exploit Spatial Computing. The taking of evidence, indeed, is a complex activity that not only involves several actors (judges, lawyers, clerks, advisors) but often requires accurate topographic surveys of places and objects. Moreover, another essential means of proof, the "judicial experiments" - reproductions of real-world events (e.g., a road accident) that the judge uses to evaluate if and how a given fact has taken place - could be usefully carried out in virtual environments. In this paper we propose a novel approach for digital justice based on a multi-user, multimodal virtual collaboration platform that enables technology-enhanced acquisition and analysis of trial evidence.
Source: FRAME'21 - 1st Workshop on Flexible Resource and Application Management on the Edge, pp. 41–44, Sweden, Virtual Event, 25/06/2021
DOI: 10.1145/3452369.3463820
Project(s): ACCORDION via OpenAIRE
See at: ISTI Repository Open Access | dl.acm.org Restricted | CNR ExploRA


2021 Conference article Open Access OPEN
A deep learning method for frame selection in videos for structure from motion pipelines
Banterle F., Gong R., Corsini M., Ganovelli F., Van Gool L., Cignoni P.
Structure-from-Motion (SfM) using the frames of a video sequence can be a challenging task: there is a lot of redundant information, the computational time increases quadratically with the number of frames, and there may be low-quality images (e.g., blurred frames) that decrease the final quality of the reconstruction. To overcome all these issues, we present a novel deep-learning architecture designed to speed up SfM by selecting frames using a predicted sub-sampling frequency. This architecture is general and can learn/distill the knowledge of any algorithm for selecting frames from a video for generating high-quality reconstructions. One key advantage is that we can run our architecture in real time, saving computations while keeping high-quality results.
Source: ICIP 2021 - 28th IEEE International Conference on Image Processing, pp. 3667–3671, Anchorage, Alaska, USA, 19-22/09/2021
DOI: 10.1109/icip42928.2021.9506227
Project(s): ENCORE via OpenAIRE
See at: ISTI Repository Open Access | doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
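Once a sub-sampling frequency has been predicted for a video, the selection step itself is simple. A hedged sketch of that final step, where a crude second-difference sharpness proxy stands in for the learned frame-quality judgement (all names and thresholds are illustrative):

```python
def sharpness(frame):
    """Crude sharpness proxy: mean absolute second difference of a 1D
    intensity profile (a real pipeline might use Laplacian variance)."""
    return sum(abs(frame[i - 1] - 2 * frame[i] + frame[i + 1])
               for i in range(1, len(frame) - 1)) / max(len(frame) - 2, 1)

def select_frames(frames, step, blur_threshold=0.0):
    """Keep every `step`-th frame, skipping frames below the sharpness
    threshold; `step` plays the role of the predicted sub-sampling frequency."""
    return [i for i in range(0, len(frames), step)
            if sharpness(frames[i]) >= blur_threshold]

# Ten synthetic "frames": alternating textured and flat (blurred-like) profiles.
textured = [0, 1, 0, 1, 0, 1, 0, 1]
flat = [0.5] * 8
frames = [textured if i % 2 == 0 else flat for i in range(10)]
picked = select_frames(frames, step=2, blur_threshold=0.1)
```

Feeding only `picked` frames to SfM is what cuts the quadratic matching cost.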


2021 Contribution to conference Open Access OPEN
Proceedings - Web3D 2021: 26th ACM International Conference on 3D Web Technology
Ganovelli F., Mc Donald C., Banterle F., Potenziani M., Callieri M., Jung Y.
The annual ACM Web3D Conference is a major event that unites researchers, developers, entrepreneurs, experimenters, artists and content creators in a dynamic learning environment. Attendees share and explore methods of using, enhancing and creating new 3D Web and multimedia technologies such as X3D, VRML, COLLADA, the MPEG family, U3D, Java3D and other technologies. The conference also focuses on recent trends in interactive 3D graphics, information integration and usability in the wide range of Web3D applications from mobile devices to high-end immersive environments.
Source: New York: ACM, Association for Computing Machinery, 2021
DOI: 10.1145/3485444
See at: dl.acm.org Open Access | ISTI Repository Open Access | CNR ExploRA


2020 Journal article Closed Access
Turning a Smartphone Selfie into a Studio Portrait
Capece N., Banterle F., Cignoni P., Ganovelli F., Erra U., Potel M.
We introduce a novel algorithm that turns a flash selfie taken with a smartphone into a studio-like photograph with uniform lighting. Our method uses a convolutional neural network trained on a set of pairs of photographs acquired in a controlled environment. For each pair, we have one photograph of a subject's face taken with the camera flash enabled and another one of the same subject in the same pose illuminated using a photographic studio-lighting setup. We show how our method can amend lighting artifacts introduced by a close-up camera flash, such as specular highlights, shadows, and skin shine.
Source: IEEE Computer Graphics and Applications 40 (2020): 140–147. doi:10.1109/MCG.2019.2958274
DOI: 10.1109/mcg.2019.2958274
See at: IEEE Computer Graphics and Applications Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA


2020 Conference article Open Access OPEN
Nor-Vdpnet: a no-reference high dynamic range quality metric trained on Hdr-Vdp 2
Banterle F., Artusi A., Moreo A., Carrara F.
HDR-VDP 2 has convincingly been shown to be a reliable metric for image quality assessment, and it is currently playing a remarkable role in the evaluation of complex image processing algorithms. However, HDR-VDP 2 is known to be computationally expensive (both in terms of time and memory) and is constrained to the availability of a ground-truth image (the so-called reference) against which the quality of a processed image is quantified. These aspects impose severe limitations on the applicability of HDR-VDP 2 to real-world scenarios involving large quantities of data or requiring real-time responses. To address these issues, we propose Deep No-Reference Quality Metric (NoR-VDPNet), a deep-learning approach that learns to predict the global image quality feature (i.e., the mean-opinion-score index Q) that HDR-VDP 2 computes. NoR-VDPNet is no-reference (i.e., it operates without a ground-truth reference) and its computational cost is substantially lower than that of HDR-VDP 2 (by more than an order of magnitude). We demonstrate the performance of NoR-VDPNet in a variety of scenarios, including the optimization of the parameters of a denoiser and JPEG-XT.
Source: IEEE International Conference on Image Processing (ICIP 2020), pp. 126–130, Abu Dhabi, United Arab Emirates, 25/10/2020-28/10/2020
DOI: 10.1109/icip40778.2020.9191202
Project(s): EVOCATION via OpenAIRE, ENCORE via OpenAIRE, RISE via OpenAIRE
See at: ISTI Repository Open Access | zenodo.org Open Access | doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
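The abstract mentions using the metric to optimize denoiser parameters; because the distilled metric is cheap, an exhaustive search over a parameter becomes affordable. A toy sketch of such a tuning loop, with a full-reference MSE-based score standing in for NoR-VDPNet (the denoiser and all names are illustrative):

```python
import random

def box_denoise(signal, radius):
    """Box-filter denoiser; `radius` is the parameter being tuned."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def quality(ref, img):
    """Stand-in for the distilled metric (negative MSE; higher is better)."""
    return -sum((r - i) ** 2 for r, i in zip(ref, img)) / len(ref)

random.seed(0)
clean = [1.0 if 20 <= i < 40 else 0.0 for i in range(64)]
noisy = [v + random.gauss(0.0, 0.3) for v in clean]

# A cheap metric makes exhaustive search over the parameter affordable.
best_radius = max(range(8), key=lambda r: quality(clean, box_denoise(noisy, r)))
```

With a metric as costly as HDR-VDP 2, evaluating every candidate in such a loop would be impractical; that is the efficiency argument the paper makes.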


2019 Journal article Open Access OPEN
DeepFlash: turning a flash selfie into a studio portrait
Capece N., Banterle F., Cignoni P., Ganovelli F., Scopigno R., Erra U.
We present a method for turning a flash selfie taken with a smartphone into a photograph as if it were taken in a studio setting with uniform lighting. Our method uses a convolutional neural network trained on a set of pairs of photographs acquired in an ad-hoc acquisition campaign. Each pair consists of one photograph of a subject's face taken with the camera flash enabled and another one of the same subject in the same pose illuminated using a photographic studio-lighting setup. We show how our method can amend defects introduced by a close-up camera flash, such as specular highlights, shadows, skin shine, and flattened images.
Source: Signal Processing: Image Communication 77 (2019): 28–39. doi:10.1016/j.image.2019.05.013
DOI: 10.1016/j.image.2019.05.013
DOI: 10.48550/arxiv.1901.04252
See at: arXiv.org e-Print Archive Open Access | Signal Processing Image Communication Open Access | ISTI Repository Open Access | Signal Processing Image Communication Restricted | doi.org Restricted | CNR ExploRA


2019 Journal article Open Access OPEN
High dynamic range point clouds for real-time relighting
Sabbadin M., Palma G., Banterle F., Boubekeur T., Cignoni P.
Acquired 3D point clouds enable quick modeling of virtual scenes from the real world. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality (AR) scenarios. Typically, a standard relighting environment exploits global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene that can cover part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings on the original cloud. At this stage, we propagate the expansion to the regions not covered by the renderings or with low-quality dynamic range by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance.
As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step with respect to the perfect ground truth. We also report experiments with real captured data, covering a range of capture technologies, from active scanning to multiview stereo reconstruction.
Source: Computer Graphics Forum (Online) 38 (2019): 513–525. doi:10.1111/cgf.13857
DOI: 10.1111/cgf.13857
Project(s): EMOTIVE via OpenAIRE
See at: ISTI Repository Open Access | diglib.eg.org Restricted | Computer Graphics Forum Restricted | CNR ExploRA
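The first component, LDR-to-HDR expansion against an exemplar, can be illustrated with a deliberately simplified model: linearise the LDR samples with an inverse gamma and fit one global scale on the samples the HDR exemplar covers. The paper's actual method expands renderings and diffuses the result with a Poisson solve; everything below is a stand-in.

```python
def expand_ldr(ldr, hdr_exemplar, covered_idx, gamma=2.2):
    """Linearise LDR samples with an inverse gamma, then fit one global
    scale k (least squares through the origin) on the samples for which
    HDR exemplar values are available, and apply it everywhere."""
    linear = [v ** gamma for v in ldr]
    num = sum(linear[i] * hdr_exemplar[i] for i in covered_idx)
    den = sum(linear[i] ** 2 for i in covered_idx)
    k = num / den
    return [k * v for v in linear]

# Synthetic cloud: true radiance, observed through exposure + gamma encoding.
true_hdr = [0.1, 0.4, 0.9, 2.0, 5.0]
exposure = 0.18
ldr = [min(1.0, exposure * h) ** (1 / 2.2) for h in true_hdr]
expanded = expand_ldr(ldr, true_hdr, covered_idx=[0, 1, 2])
```

Because the exemplar only needs to overlap part of the cloud, the fitted scale propagates the dynamic-range boost to samples it never saw, which is the spirit of the paper's projection-plus-diffusion step.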


2019 Conference article Open Access OPEN
HMD-TMO: A Tone Mapping Operator for 360 degrees HDR Images Visualization for Head Mounted Displays
Goude I., Cozot R., Banterle F.
We propose a Tone Mapping Operator, denoted HMD-TMO, dedicated to the visualization of 360-degree High Dynamic Range images on Head Mounted Displays. The few existing studies on this topic have shown that the existing Tone Mapping Operators for classic 2D images are not adapted to 360-degree High Dynamic Range images. Consequently, several dedicated operators have been proposed. Instead of operating on the entire 360-degree image, they only consider the part of the image currently viewed by the user. Tone mapping a part of the 360-degree image is less challenging, but it does not preserve the global luminance dynamic of the scene. To cope with this problem, we propose a novel tone mapping operator that takes advantage of both a view-dependent tone mapping that enhances contrast, and a Tone Mapping Operator applied to the entire 360-degree image that preserves global coherency. Furthermore, we present a subjective study to model lightness perception in a Head Mounted Display.
Source: Computer Graphics International Conference (CGI 2019), pp. 216–227, Calgary, Canada, 17/06/2019 - 20/06/2019
DOI: 10.1007/978-3-030-22514-8_18
Project(s): ReVeRY via OpenAIRE
See at: hal.archives-ouvertes.fr Open Access | ISTI Repository Open Access | vcg.isti.cnr.it Open Access | doi.org Restricted | HAL-Rennes 1 Restricted | CNR ExploRA
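The core blend described above, a curve driven by the whole panorama combined with a curve driven by the current viewport, can be sketched with Reinhard-style operators (the blending weight and the curves are illustrative choices, not the paper's):

```python
def reinhard(l, key, l_avg):
    """Reinhard-style global operator: scale by key/average, then compress."""
    m = key * l / l_avg
    return m / (1.0 + m)

def hmd_tmo(viewport, full_pano, alpha=0.5, key=0.18):
    """Blend a curve driven by the whole 360 panorama (global coherency)
    with a curve driven by the current viewport (local contrast)."""
    g_avg = sum(full_pano) / len(full_pano)
    v_avg = sum(viewport) / len(viewport)
    return [alpha * reinhard(l, key, g_avg)
            + (1.0 - alpha) * reinhard(l, key, v_avg)
            for l in viewport]

pano = [0.01] * 900 + [100.0] * 100   # dark scene with a bright window
dark_view = [0.01] * 50               # the user looks at the dark part
out = hmd_tmo(dark_view, pano)
```

When the user faces a dark region, the view-dependent term brightens it while the panorama-driven term keeps it from being rendered as brightly as the window, preserving the scene's overall luminance relationships.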


2019 Journal article Open Access OPEN
Efficient evaluation of image quality via deep-learning approximation of perceptual metrics
Artusi A., Banterle F., Moreo A., Carrara F.
Image metrics based on the Human Visual System (HVS) play a remarkable role in the evaluation of complex image processing algorithms. However, mimicking the HVS is known to be complex and computationally expensive (both in terms of time and memory), and its usage is thus limited to a few applications and to small input data. All of this makes such metrics not fully attractive in real-world scenarios. To address these issues, we propose Deep Image Quality Metric (DIQM), a deep-learning approach to learn the global image quality feature (mean-opinion-score). DIQM can emulate existing visual metrics efficiently, reducing the computational costs by more than an order of magnitude with respect to existing implementations.
Source: IEEE Transactions on Image Processing (Online) 29 (2019): 1843–1855. doi:10.1109/TIP.2019.2944079
DOI: 10.1109/tip.2019.2944079
Project(s): ENCORE via OpenAIRE, RISE via OpenAIRE
See at: ISTI Repository Open Access | ZENODO Open Access | IEEE Transactions on Image Processing Open Access | IEEE Transactions on Image Processing Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA


2019 Journal article Open Access OPEN
Developing the ArchAIDE application: A digital workflow for identifying, organising and sharing archaeological pottery using automated image recognition
Anichini F., Banterle F., Buxeda I Garrigós J., Calleri M., Dershowitz N., Diaz D. L., Evans T., Gattiglia G., Gualandi M. L., Hervas M. A., Itkin B., Madrid I Fernandez M, Miguel Gascón E., Remmy M., Richards J., Scopigno R., Vila L., Wolf L., Wright H., Zallocco M.
Every day, archaeologists are working to discover and tell stories using objects from the past, investing considerable time, effort and funding to identify and characterise individual finds. Pottery is of fundamental importance for the comprehension and dating of archaeological contexts, and for understanding the dynamics of production, trade flows, and social interactions. Today, characterisation and classification of ceramics are carried out manually, through the expertise of specialists and the use of analogue catalogues held in archives and libraries. While not seeking to replace the knowledge and expertise of specialists, the ArchAIDE project (archaide.eu) worked to optimise and economise the identification process, developing a new system that streamlines the practice of pottery recognition in archaeology using the latest automatic image recognition technology. At the same time, ArchAIDE worked to ensure archaeologists remained at the heart of the decision-making process within the identification workflow, and focussed on optimising tasks that were repetitive and time consuming. Specifically, ArchAIDE worked to support the essential classification and interpretation work of archaeologists (during both fieldwork and post-excavation analysis) with an innovative app for tablets and smartphones. This paper summarises the work of this three-year project, funded by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement N.693548, with a consortium of partners representing both the academic and industry-led ICT domains, and the academic and development-led archaeology domains. The collaborative work of the archaeological and technical partners created a pipeline where potsherds are photographed, their characteristics compared against a trained neural network, and the results returned with suggested matches from a comparative collection with typical pottery types and characteristics. Once the correct type is identified, all relevant information for that type is linked to the new sherd and stored within a database that can be shared online.
Source: Internet Archaeology 52 (2019). doi:10.11141/ia.52.7
DOI: 10.11141/ia.52.7
Project(s): ArchAIDE via OpenAIRE
See at: Internet Archaeology Open Access | Diposit Digital de la Universitat de Barcelona Open Access | ISTI Repository Open Access | CNR ExploRA


2019 Conference article Closed Access
Image sets compression via patch redundancy
Corsini M., Banterle F., Ponchio F., Cignoni P.
In recent years, the development of compression algorithms for image collections (e.g., photo albums) has become very popular due to the enormous diffusion of digital photographs. Typically, current solutions create an image sequence from the images of the photo album to make them suitable for compression with a High Efficiency Video Coding (HEVC) encoder. In this study, we investigated a different approach to compressing a collection of similar images. Our main idea is to exploit inter- and intra-patch redundancy to compress the entire set of images. In practice, our approach is equivalent to compressing the image set with Vector Quantization (VQ) using a global codebook. Our tests show that our clustering algorithm is effective for a large number of images.
Source: EUVIP 2019 - 8th European Workshop on Visual Information Processing, pp. 10–15, Roma, Italy, 28-31 October 2019
DOI: 10.1109/euvip47703.2019.8946237
See at: doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
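The global-codebook Vector Quantization view of the method can be sketched directly; here the codebook is fixed by hand, whereas the paper's clustering algorithm builds it from the patch data (all data are toy values):

```python
def nearest(patch, codebook):
    """Index of the codeword minimising squared distance to the patch."""
    def d2(a, b): return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: d2(patch, codebook[i]))

def encode_set(images, codebook):
    """Encode every patch of every image as a codeword index: inter- and
    intra-image patch redundancy collapses onto the shared global codebook."""
    return [[nearest(p, codebook) for p in img] for img in images]

def decode_set(encoded, codebook):
    """Reconstruct each image by looking the indices back up."""
    return [[codebook[i] for i in idx] for idx in encoded]

# Two similar "images", each a list of 2-pixel patches, and a 2-entry codebook.
codebook = [(0.0, 0.0), (1.0, 1.0)]
images = [[(0.1, 0.0), (0.9, 1.0)], [(0.0, 0.1), (1.0, 0.9)]]
codes = encode_set(images, codebook)
```

Because the codebook is shared across the whole set, similar patches in different photographs cost only one stored codeword plus small per-patch indices, which is where the compression comes from.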


2018 Report Open Access OPEN
High dynamic range expansion of point clouds for real-time relighting
Sabbadin M., Palma G., Banterle F., Boubekeur T., Cignoni P.
Acquired 3D point clouds enable quick modeling of virtual scenes from the real world. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the genuine light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality scenarios. Typically, a standard relighting environment exploits global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First of all, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene, which may only cover part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings on the original cloud. At this stage, we propagate the expansion to the regions not covered by the renderings or with low-quality dynamic range by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance.
As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step with respect to the perfect ground truth. We also report experiments on real captured data, covering a range of capture technologies, from active scanning to multiview stereo reconstruction.
Source: ISTI Technical reports, 2018

See at: ISTI Repository Open Access | CNR ExploRA


2018 Journal article Open Access OPEN
Automatic saturation correction for dynamic range management algorithms
Artusi A., Pouli T., Banterle F., Akyuz A. O.
High dynamic range (HDR) images require tone reproduction to match the range of values to the capabilities of a display. For computational reasons and given the absence of fully calibrated imagery, rudimentary color reproduction is often added as a post-processing step rather than integrated into tone reproduction algorithms. In the general case, this currently requires manual parameter tuning, and can be automated only for some global tone reproduction operators by inferring parameters from the tone curve. We present a novel and fully automatic saturation correction technique, suitable for any tone reproduction operator (including inverse tone reproduction), which exhibits fewer distortions in hue and luminance reproduction than the current state-of-the-art. We validated its comparative effectiveness through subjective experiments and objective metrics. Our experiments confirm that saturation correction significantly contributes toward the perceptually plausible color reproduction of tonemapped content and would, therefore, be useful in any color-critical application.
Source: Signal Processing: Image Communication 63 (2018): 100–112. doi:10.1016/j.image.2018.01.011
DOI: 10.1016/j.image.2018.01.011
Project(s): KIOS CoE via OpenAIRE
See at: ISTI Repository Open Access | ZENODO Open Access | Signal Processing Image Communication Open Access | Signal Processing Image Communication Restricted | www.sciencedirect.com Restricted | CNR ExploRA
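The classic colour-ratio formulation that saturation-correction methods build on is C_out = (C_in / L_in)^s * L_out, where s controls saturation. A sketch of applying it with a tone curve (the paper's contribution, choosing s automatically, is not reproduced here):

```python
def apply_tone_curve_with_saturation(rgb, tonemap, s):
    """Colour-ratio reproduction: C_out = (C_in / L_in)**s * L_out.
    s = 1 preserves channel ratios exactly; s < 1 desaturates."""
    r, g, b = rgb
    l_in = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 luminance
    l_out = tonemap(l_in)
    return tuple((c / l_in) ** s * l_out for c in rgb)

curve = lambda l: l / (1.0 + l)   # simple global tone curve
out = apply_tone_curve_with_saturation((2.0, 1.0, 0.5), curve, s=0.8)
```

Tone curves compress luminance without compressing chroma, which is why tonemapped images look oversaturated at s = 1; a well-chosen s < 1 compensates, and automating that choice is exactly what the paper addresses.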


2018 Journal article Open Access OPEN
Fine-grained detection of inverse tone mapping in HDR images
Fan W., Valenzise G., Banterle F., Dufaux F.
High dynamic range (HDR) imaging makes it possible to capture the full range of physical luminance of a real-world scene, and is expected to progressively replace traditional low dynamic range (LDR) pictures and videos. Despite the increasing popularity of HDR, very little attention has been devoted to the new forensic problems that are characteristic of this content. In this paper, we address for the first time such a problem by identifying the source of an HDR picture. Specifically, we consider the two currently most common techniques to generate an HDR image: fusing multiple LDR images with different exposure times, or inverse tone mapping an LDR picture. We show that, in order to apply conventional forensic tools to HDR images, they need to be properly preprocessed, and we propose and evaluate a few simple HDR forensic preprocessing strategies for this purpose. In addition, we propose a new forensic feature based on Fisher scores, calculated under Gaussian mixture models. We show that the proposed feature outperforms the popular SPAM features in classifying the HDR image source on image blocks as small as 3 x 3, which makes our method suitable for detecting composite forgeries that combine HDR patches originating from different acquisition processes.
Source: Signal Processing (Print) 152 (2018): 178–188. doi:10.1016/j.sigpro.2018.05.028
DOI: 10.1016/j.sigpro.2018.05.028
See at: Signal Processing Open Access | ISTI Repository Open Access | Signal Processing Restricted | Hyper Article en Ligne Restricted | www.sciencedirect.com Restricted | CNR ExploRA
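A Fisher score is the gradient of the log-likelihood with respect to the model parameters, evaluated at the observed data. The paper computes it under Gaussian mixture models; the single-Gaussian sketch below keeps it closed-form (illustrative, not the paper's feature pipeline):

```python
def fisher_score_gaussian(samples, mu, var):
    """Fisher score of i.i.d. samples under N(mu, var): the gradient of the
    log-likelihood with respect to (mu, var), averaged per sample."""
    n = len(samples)
    d_mu = sum((x - mu) / var for x in samples) / n
    d_var = sum(-0.5 / var + 0.5 * (x - mu) ** 2 / var ** 2
                for x in samples) / n
    return (d_mu, d_var)

# Samples matching the model parameters give a (near-)zero score vector;
# samples from a different source (e.g. another acquisition process) do not.
score = fisher_score_gaussian([-1.0, 0.0, 1.0], mu=0.0, var=2.0 / 3.0)
```

A classifier trained on such score vectors, computed per image block, is the kind of feature the paper uses to tell multi-exposure fusion from inverse tone mapping.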


2017 Contribution to book Closed Access
Creating HDR video using retargeting
Banterle F., Unger J.
This chapter presents an overview of two methods for augmenting SDR video sequences with HDR information in order to create HDR videos. The goal of both methods is to fill in saturated regions in the SDR video frames by retargeting non-saturated image data from a sparse set of HDR images.
Source: High Dynamic Range Video: Concepts, Technologies and Applications, edited by Chalmers, A.; Campisi, P.; Shirley, P.; Olaizola, I., pp. 45–59, 2017
DOI: 10.1016/b978-0-12-809477-8.00002-9
See at: doi.org Restricted | www.sciencedirect.com Restricted | CNR ExploRA